Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem by asking the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
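As an illustration of the kind of pipeline the challenge targets, below is a minimal sketch (not any participant's submission) of an NPU-friendly 3X super-resolution network with depth-to-space upsampling, followed by full-integer INT8 conversion via TensorFlow Lite; the layer widths and conversion settings are illustrative assumptions:

```python
import tensorflow as tf

def build_sr_model(scale=3, channels=32):
    """Tiny SR network: a few convolutions plus depth_to_space upsampling."""
    inp = tf.keras.Input(shape=(None, None, 3))
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    # Predict scale^2 * 3 channels, then rearrange them into a 3x-larger image.
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(inp, out)

def quantize_int8(model, representative_images):
    """Post-training full-integer quantization for edge-NPU deployment."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = lambda: (
        [img[None].astype("float32")] for img in representative_images)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```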
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. It is therefore crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed description is provided in this paper.
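For context, a lightweight monocular depth network sized for such a low-power target might look like the following sketch (an illustrative assumption, not any submitted model):

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Lightweight encoder-decoder for single-image depth estimation."""
    def __init__(self, width=16):
        super().__init__()
        def block(cin, cout, stride=1):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride, 1),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1 = block(3, width, 2)              # 1/2 resolution
        self.enc2 = block(width, width * 2, 2)      # 1/4
        self.enc3 = block(width * 2, width * 4, 2)  # 1/8
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = block(width * 4, width * 2)
        self.dec1 = block(width * 2, width)
        self.head = nn.Conv2d(width, 1, 3, 1, 1)

    def forward(self, x):
        x = self.enc3(self.enc2(self.enc1(x)))
        x = self.dec2(self.up(x))
        x = self.dec1(self.up(x))
        return torch.relu(self.head(self.up(x)))  # non-negative depth, input resolution
```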
Denoising of magnetic resonance images is beneficial for improving the quality of low signal-to-noise-ratio images. Recently, denoising using deep neural networks has demonstrated encouraging results. However, most of these networks rely on supervised learning, which requires large amounts of noisy-clean image pairs as training data. Acquiring training images, particularly clean images, is expensive and time-consuming. Hence, methods such as Noise2Noise (N2N), which require only pairs of noisy images, have been developed to reduce the burden of obtaining a training dataset. In this study, we propose a new self-supervised denoising method, Coil2Coil (C2C), that does not require the acquisition of clean images or paired noisy images for training. Instead, the method exploits multi-channel data from phased-array coils to generate training images. First, it divides the multi-channel coil images into two images, one for the input and the other for the label. They are then processed to impose noise independence and sensitivity normalization so that they can be used as training images for N2N. For inference, the method takes a coil-combined image (e.g., a DICOM image) as input, allowing broad application of the method. When evaluated using images with synthetically added noise, C2C showed the best performance among several self-supervised methods, reporting results comparable to those of supervised methods. When tested on DICOM images, C2C successfully reduced real noise without showing structure-dependent residuals in the error maps. Owing to the significant advantage of requiring no additional scans for clean or paired images, the method can easily be applied to various clinical settings.
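A toy sketch of the core idea, building a noisy input/label pair from phased-array coil data for Noise2Noise training, is shown below; the even/odd split and root-sum-of-squares combination are simplifying assumptions, and the actual C2C method additionally enforces noise independence and sensitivity normalization:

```python
import numpy as np

def coil_split_pair(coil_imgs):
    """Toy Coil2Coil-style split: build a noisy (input, label) pair from
    multi-channel coil images without any clean reference.
    coil_imgs: complex array of shape (n_coils, H, W).
    NOTE: simplified sketch; the paper additionally decorrelates noise
    between the two halves and normalizes coil sensitivities."""
    even, odd = coil_imgs[0::2], coil_imgs[1::2]
    combine = lambda c: np.sqrt((np.abs(c) ** 2).sum(axis=0))  # root-sum-of-squares
    return combine(even), combine(odd)  # input image, label image for N2N training
```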
Since the introduction of the original BERT (i.e., BASE BERT), researchers have developed various customized BERT models that leverage the benefits of transfer learning to improve performance on specific domains and tasks. Due to the nature of mathematical text, which often uses domain-specific vocabulary along with equations and math symbols, the development of a new BERT model for mathematics would be useful for many mathematical downstream tasks. In this resource paper, we introduce our multi-institutional effort (i.e., two learning platforms and three academic institutions in the US) toward this need: MathBERT, a model created by pre-training the BASE BERT model on a large mathematical corpus ranging from pre-kindergarten (pre-K) to high-school and college-graduate-level mathematical content. In addition, we select three general NLP tasks that are commonly used in mathematics education, knowledge component prediction, auto-grading of open-ended Q&A, and knowledge tracing, to demonstrate the superiority of MathBERT over BASE BERT. Our experiments show that MathBERT outperforms the prior best methods by 1.2-22% and BASE BERT by 2-8% on these tasks. In addition, we build a mathematics-specific vocabulary, 'mathVocab', to train MathBERT, and we find that MathBERT pre-trained with 'mathVocab' outperforms MathBERT trained with the BASE BERT vocabulary (i.e., 'origVocab'). MathBERT is currently being adopted by the participating learning platforms: Stride, Inc., a commercial educational resource provider, and ASSISTments.org, a free online educational platform. We release MathBERT for public use at: https://github.com/tbs17/mathbert.
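Assuming the released weights follow the standard Hugging Face layout (the checkpoint name below is an assumption; consult the repository for the actual one), using MathBERT as a feature extractor could look like this:

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint name -- check the GitHub repo for the released weights.
name = "tbs17/MathBERT"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

text = "Solve for x: 2x + 3 = 11"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] vector for downstream tasks
```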
Cross-modal retrieval across image and text modalities is a challenging task due to its inherent ambiguity: An image often exhibits various situations, and a caption can be coupled with diverse images. Set-based embedding has been studied as a solution to this problem. It seeks to encode a sample into a set of different embedding vectors that capture different semantics of the sample. In this paper, we present a novel set-based embedding method, which is distinct from previous work in two aspects. First, we present a new similarity function called smooth-Chamfer similarity, which is designed to alleviate the side effects of existing similarity functions for set-based embedding. Second, we propose a novel set prediction module that produces a set of embedding vectors effectively capturing diverse semantics of the input via the slot attention mechanism. Our method is evaluated on the COCO and Flickr30K datasets across different visual backbones, where it outperforms existing methods including ones that demand substantially larger computation at inference.
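A hedged sketch of what a smooth-Chamfer similarity between two embedding sets could look like is given below; it softens the hard max of standard Chamfer matching with a log-sum-exp at temperature alpha, which captures the general idea, though the paper's exact normalization may differ:

```python
import torch
import torch.nn.functional as F

def smooth_chamfer_similarity(s1, s2, alpha=16.0):
    """Sketch of a smooth-Chamfer similarity between two embedding sets.
    Standard Chamfer similarity pairs each element with its hard-max partner;
    here the max is softened with log-sum-exp (temperature alpha).
    s1: (n, d), s2: (m, d) -- rows are embedding vectors."""
    sim = F.normalize(s1, dim=-1) @ F.normalize(s2, dim=-1).T  # (n, m) cosine sims
    a = torch.logsumexp(alpha * sim, dim=1).mean() / alpha  # each s1 element vs s2
    b = torch.logsumexp(alpha * sim, dim=0).mean() / alpha  # each s2 element vs s1
    return 0.5 * (a + b)
```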
Object manipulation is a critical capability for service robots, but it is difficult to solve with reinforcement learning for reasons such as sample efficiency. In this paper, to address object manipulation, we propose a novel framework, AP-NPQL (Non-Parametric Q-Learning with Action Primitives), that can efficiently solve reinforcement learning for object manipulation with visual inputs and sparse rewards by using a non-parametric policy and action primitives of appropriate behavior. We evaluate the efficiency and performance of the proposed AP-NPQL on four object manipulation tasks in simulation (pushing a plate, stacking a box, flipping a cup, and picking and placing a plate), and it turns out that our AP-NPQL outperforms state-of-the-art algorithms based on parametric policies and behavior priors in terms of learning time and task success rate. We also successfully transferred the policy learned for the plate pick-and-place task to a real robot in a sim-to-real manner.
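The action-primitive idea can be illustrated with a hypothetical gym-style wrapper (the interface below is an assumption, not the paper's implementation): the Q-learner selects among a handful of parameterized macro-actions instead of raw joint commands:

```python
class PrimitiveWrapper:
    """Hypothetical sketch: expose parameterized action primitives (reach,
    grasp, lift, ...) instead of raw joint commands, shrinking the action
    space the Q-learner must search. `env` is assumed to follow the usual
    gym-style step/reset interface."""
    def __init__(self, env, primitives):
        self.env = env
        self.primitives = primitives  # id -> fn(params) yielding low-level actions

    def step(self, primitive_id, params):
        total_reward, obs, done, info = 0.0, None, False, {}
        for action in self.primitives[primitive_id](params):
            obs, reward, done, info = self.env.step(action)  # roll out macro-action
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```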
This paper proposes Gait Decomposition (GD), a method for mathematically decomposing the motion of snake robots, and Gait Parameter Gradient (GPG), a method for optimizing the decomposed gait parameters. GD mathematically decomposes the snake gaits that are produced with curve functions when generating the motion of a snake robot. With this method, the gait of a snake robot can be intuitively decomposed into matrices, and the parameters of the curve functions required for gait generation can be tuned flexibly. This addresses the parameter-tuning problem that has made snake robots difficult to use in practice. Consequently, if GD is applied to a snake robot, a variety of gaits can be generated with only a few parameters, enabling snake robots to be used in many fields. We also implement the GPG algorithm to optimize the gait curve functions and define the gait of a snake robot through GD.
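As an example of the few-parameter curve functions such a decomposition operates on, the classic serpenoid gait (a standard snake-robot gait family, used here purely as an illustration) drives joint i with a phase-shifted sinusoid:

```python
import numpy as np

def serpenoid_gait(n_joints, t, amp=0.6, spatial=0.8, temporal=2.0, offset=0.0):
    """Serpenoid curve gait: joint i follows a phase-shifted sinusoid.
    amp      -- joint amplitude (rad)
    spatial  -- phase shift between adjacent joints (rad)
    temporal -- angular frequency of the wave (rad/s)
    offset   -- bias term, e.g., for turning
    Returns target joint angles (rad) at time t."""
    i = np.arange(n_joints)
    return amp * np.sin(temporal * t + spatial * i) + offset
```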
Currently, mobile robots are developing rapidly and finding numerous applications in industry. However, several problems remain related to their practical use, such as the need for expensive hardware and high power consumption. In this study, we propose a navigation system that can operate on a low-end computer with an RGB-D camera, together with a mobile robot platform for running an integrated autonomous driving system. The proposed system does not require LiDARs or a GPU. Our raw-depth-image-based ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost, off-the-shelf single-board computer with integrated SLAM, global path planning, and motion planning. We apply rule-based and learning-based navigation policies using the traversability map. Running sensor data processing and other autonomous driving functions simultaneously, our navigation policy executes rapidly, issuing control commands at a refresh rate of 18 Hz, whereas other systems have slower refresh rates. Our method outperforms current state-of-the-art navigation approaches under limited computational resources, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in an indoor environment. Our entire work, including hardware and software, is released under an open-source license (https://github.com/shinkansan/2019-ugrp-doom). A detailed video is available at https://youtu.be/mf3iufuhppm.
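A naive stand-in for depth-image-based ground segmentation (illustrative only; the paper's method is more involved) back-projects each pixel and keeps those near the floor plane:

```python
import numpy as np

def ground_mask(depth, fy, cy, cam_height=0.3, tol=0.05):
    """Toy ground segmentation from a raw depth image.
    Assumes a level camera mounted cam_height meters above the floor;
    depth is in meters, shape (H, W); fy, cy are vertical intrinsics.
    Marks a pixel as traversable ground if its back-projected height
    above the floor plane is within tol meters."""
    v = np.arange(depth.shape[0])[:, None]  # pixel row index, broadcast over columns
    y = (v - cy) / fy * depth               # downward coordinate in the camera frame
    return (np.abs(cam_height - y) < tol) & (depth > 0)
```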
3D-aware image synthesis focuses on preserving spatial consistency while generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate a generative NeRF and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features to the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated with three image datasets, AFHQ, CelebA, and Cars. As a result, our model shows strong 3D-consistency with fine details and smooth interpolation in conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis with a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated 3D-aware images of each class of the datasets, as it is possible to synthesize class-conditional images with $\text{C}^{3}$G-NeRF.
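One plausible way to inject class-conditional features into a NeRF generator (an illustrative sketch, not the paper's exact architecture) is to concatenate a learned class embedding with each point's positional encoding before the MLP:

```python
import torch
import torch.nn as nn

class ConditionalNeRFBlock(nn.Module):
    """Sketch of class-conditional NeRF generation: a class embedding is
    broadcast to every sampled point and concatenated with its positional
    encoding, so interpolating between class embeddings yields continuous
    conditional manipulation of the rendered output."""
    def __init__(self, pos_dim=63, n_classes=3, cond_dim=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, cond_dim)
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))  # RGB + density per sampled point

    def forward(self, encoded_pts, labels):
        # encoded_pts: (B, N, pos_dim) positional encodings, labels: (B,)
        cond = self.embed(labels)[:, None].expand(-1, encoded_pts.shape[1], -1)
        return self.mlp(torch.cat([encoded_pts, cond], dim=-1))
```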
Cellular automata (CA) captivate researchers due to the emergent, complex, individualized behavior that simple global rules of interaction enact. Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images. This new branch of CA is called neural cellular automata [1]. The goal of this project is to use the idea of neural cellular automata to grow prediction machines. We place many different convolutional neural networks in a grid. Each conv net cell outputs a prediction of what the next state will be and minimizes its predictive error. Cells receive their neighbors' colors and fitnesses as input, where each cell's fitness score describes how accurate its predictions are. Cells can also move to explore their environment, and some stochasticity is applied to movement.
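A simplified version of such a predictive cell update (using one shared network, whereas the project uses many distinct per-cell networks plus movement, both omitted here) could look like:

```python
import torch
import torch.nn as nn

class PredictiveCell(nn.Module):
    """Toy predictive cell: a tiny conv net that sees each cell's 3x3
    neighborhood and predicts that cell's next state; fitness is the
    negative prediction error."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, grid):
        return self.net(grid)  # predicted next grid state

# One update step of the toy CA: predict, then score per-cell fitness.
cell = PredictiveCell()
grid = torch.rand(1, 4, 32, 32)            # (batch, channels, H, W) cell states
pred = cell(grid)
next_grid = torch.rand_like(grid)          # stand-in for the true next state
fitness = -(pred - next_grid).pow(2).mean(dim=1)  # per-cell prediction accuracy
```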